ContextCapture User Guide

Orthophoto/DSM

Produce interoperable raster layers for visualization and analysis in third-party GIS/CAD software or image processing tools.

Note: When using tiling, orthophoto/DSM productions generate one file per tile. You can use the Merge orthophoto parts command (available once the production is complete) to create a single merged file for the orthophoto and for the DSM.

DSM output formats

  • TIFF/GeoTIFF: standard raster format with georeferencing information.
  • ESRI ASCII raster/ASC: common ASCII format for grid exchange.
  • XYZ: basic ASCII format with 3 columns, each line containing the X, Y and Z coordinates.

The reference 3D model geometry must be available to process DSM.
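
Because the XYZ export is plain ASCII, it can be loaded with generic tools rather than GIS software. A minimal Python sketch, assuming a whitespace-separated export and a hypothetical file name:

    import numpy as np

    # Load an XYZ DSM export: one "X Y Z" triple per line.
    # "dsm.xyz" is a hypothetical file name; use your production output.
    points = np.loadtxt("dsm.xyz")                      # shape (n, 3)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    print(f"{len(points)} samples, elevation range {z.min():.2f} to {z.max():.2f}")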

Orthophoto output formats

  • TIFF/GeoTIFF: standard raster format with georeferencing information.
  • JPEG: standard compressed image format.
  • KML Super-overlay: hierarchical image file format suited for real-time 3D display of very large orthophotos in Google Earth.

Options

  • Sampling distance: ground sampling distance of the output raster. Its unit depends on the selected Spatial reference system.
  • Maximum image part dimension (px): defines the maximum size, in pixels, of each part of the resulting raster file.
  • Projection mode: defines how the 2D data layer is processed from the 3D model (Highest point or Lowest point).
  • Orthophoto/DSM: enable or disable the corresponding production.
  • Color source:
    - Optimized computation (visible colors): the best photos with a visible-color band are selected according to the actual projection.
    - Optimized computation (thermal): the best photos with a thermal band are selected according to the actual projection.
    - Reference 3D model (visible colors): keeps the internal reference 3D model with visible colors as is (much faster).
    - Reference 3D model (thermal): keeps the internal reference 3D model with the thermal band as is (much faster).

  • No data: pixel value or color representing no information.

The reference 3D model texture and geometry must be available to process orthophoto.
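
Both outputs can be written as GeoTIFF, so the options above end up as ordinary raster metadata. A sketch of how a downstream tool can recover them, assuming the rasterio library and a hypothetical file path:

    import numpy as np
    import rasterio

    # Open a GeoTIFF DSM produced by ContextCapture (hypothetical path).
    with rasterio.open("production/dsm.tif") as src:
        dsm = src.read(1).astype(float)   # first band holds the elevation
        print(src.crs, src.res)           # spatial reference system and sampling distance
        if src.nodata is not None:        # the "No data" value chosen at production time
            dsm[dsm == src.nodata] = np.nan
        print("valid elevation range:", np.nanmin(dsm), np.nanmax(dsm))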

Annotation Detectors

The first thing to set is the annotation type. Then, except for the specific cases described hereafter, when a user imports their own source, annotations result from a processing step involving a detector. Detectors have been trained on specific data and are optimized for, and limited to, the same kind of data (same nature, same environment, same data quality and resolution). Below are the various types of detectors you can use for annotation processing:

  • Image - object detector
  • Image - segmentation detector
  • Pointcloud - segmentation detector
  • Orthophoto detector
  • Orthophoto + DSM detector

Detector types are designed to run on specific annotation types. For example, an orthophoto detector is not applicable to a 3D segmentation job. A set of detectors is available on a dedicated Bentley Communities webpage.

This page is accessible from the ContextCapture Master Annotations page (Format/Options).

If no detector fits your purpose, you are invited to submit a help request via your personal portal and describe your needs.

Annotation type

2D Objects

This detects objects in the images composing the block. Rectangular boxes are automatically drawn around objects of interest in your images. These rectangular boxes can be viewed in the Photo view and are recorded as a structured XML file in your production directory.

Objects of interest are the ones your detector has been trained to recognize. The path to this detector must be defined as shown below.

Only image-object detectors can be used to run this type of job.
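
The exact schema of this XML file is not described here, so the Python sketch below uses hypothetical element and attribute names (object, label, bbox, xmin/ymin/xmax/ymax) purely to illustrate reading rectangular boxes from such a file:

    import xml.etree.ElementTree as ET

    # Hypothetical path and schema: adapt the tag and attribute names
    # to the actual annotations XML found in your production directory.
    tree = ET.parse("production/annotations.xml")
    for obj in tree.getroot().iter("object"):
        label = obj.findtext("label")
        box = obj.find("bbox")
        xmin, ymin = float(box.get("xmin")), float(box.get("ymin"))
        xmax, ymax = float(box.get("xmax")), float(box.get("ymax"))
        print(label, (xmin, ymin, xmax, ymax))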

2D Segmentation

Each image of your dataset is classified according to the detector you use, and classes of interest are automatically extracted. The job creates a PNG mask for each of your input images; each pixel of this mask is assigned a class (a unique pixel value). The series of masks is recorded in your production directory and linked to your block images by a single XML file. 2D Segmentation results can be reviewed in the Photos tab. Classes of interest are the ones defined during your detector's training.

Only image-segmentation detectors can be used for this purpose.
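
Because each mask pixel encodes its class as a unique value, the masks can be inspected with generic image tools. A minimal sketch, assuming Pillow and NumPy and a hypothetical mask file name:

    import numpy as np
    from PIL import Image

    # Load a 2D segmentation mask; each pixel value is a class index.
    mask = np.array(Image.open("mask_0001.png"))      # hypothetical file name
    classes, counts = np.unique(mask, return_counts=True)
    for cls, cnt in zip(classes, counts):
        print(f"class {cls}: {cnt} pixels")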

3D Objects

It delivers regular 3D objects (boxes) around elements of interest. Elements of interest are the ones defined by your detector or by the 2D annotations you chose to import. 3D objects are recorded in XML format and can be exported to DGN. Tie points and point clouds (if available) can be used to better identify individual neighboring instances. The minimum number of views defines the number of unique 2D detections required for an annotation to become a 3D object.

3D objects can be displayed in the 3D view after processing and can overlay an existing mesh production.

Only image-object detectors can be used for this purpose.

3D Segmentation

This executes a classification of a 3D point cloud: either the one usually processed by ContextCapture Engine (Reconstruction Reference Mesh) or the one that was imported in the block (Block Point Clouds). The raw result of this annotation job is a classified point cloud, obtained either by executing an appropriate detector or by importing annotations in XML format. The resulting classified point cloud can be exported in LAS format but cannot be displayed in the ContextCapture Master 3D view. ContextCapture Editor can manage the resulting classified LAS file.

From 3D Segmentation it is also possible to export individual 3D objects. These 3D objects are created based on the 3D point cloud classification: neighboring points of the same class derive one individual object. Individual 3D objects are recorded as XML (see 3D Objects) and can be exported as DGN or OBJ files. 3D objects derived from 3D Segmentation can be displayed in the ContextCapture Master 3D view.

The part of the classified point cloud composing a 3D object can also be exported as individual LAS files. Individual 3D object identification can be improved by computing an additional 2D objects detection or by importing 2D object annotations. In such a case, the resulting annotation is still a classified point cloud, possibly joined by individual 3D objects that have been optimized using the 2D object detections from images. Only image-segmentation and pointcloud-segmentation detectors can be used for this type of job. An image-object detector is also usable, but limited to an assistance role to optimize the separation of neighboring objects.
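
The exported LAS file stores the class of each point in the standard classification field, so it can be inspected outside ContextCapture. A sketch, assuming laspy 2.x and a hypothetical file path:

    import numpy as np
    import laspy

    # Read a classified point cloud exported in LAS format (hypothetical path).
    # The classification codes are the classes defined by the detector's training.
    las = laspy.read("production/classified.las")
    codes, counts = np.unique(las.classification, return_counts=True)
    for code, count in zip(codes, counts):
        print(f"class {code}: {count} points")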

Segmented orthophotos

The result of this job is a series of orthophotos (and a DSM, if selected) paired with a series of PNG masks. The orthophotos are traditional ContextCapture orthophotos, and their production settings are described in the dedicated section. The PNG masks are the result of a classification driven by the user-defined detector. They are geo-registered and overlay the RGB orthophotos. Each pixel of a PNG mask has a unique value corresponding to a class defined by the detector.

The detector must be defined at the job submission stage. Following the classification into PNG masks, vectorization is possible as a Shapefile export. This transforms the class overlays into continuous polygons for optimized data management in GIS software.

Only orthophoto and orthophoto+DSM detectors can be used for this type of job. The role of the DSM is to optimize the classification according to the height of the target elements. An example is building footprints: their extraction will be more accurate if the detector has been trained and run on an orthophoto+DSM case rather than on an orthophoto alone.

Segmented orthophotos cannot be viewed in the ContextCapture Master 3D view.
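
The sketch below is not the product's Shapefile exporter, but it illustrates the vectorization concept mentioned above: turning a geo-registered class mask into polygons with rasterio (file name hypothetical; the mask must carry georeferencing, for example through a world file next to the PNG):

    import rasterio
    from rasterio import features
    from shapely.geometry import shape

    # Polygonize a class mask: one polygon per connected region of a class.
    with rasterio.open("ortho_mask.png") as src:
        mask = src.read(1)
        for geom, value in features.shapes(mask, transform=src.transform):
            if value != 0:                # skip the background class
                print(int(value), shape(geom).area)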

Segmented mesh

The result of a segmented mesh is a traditional ContextCapture mesh whose texture is fused with a light color corresponding to the detected elements. As for a traditional mesh production, the settings are defined by the user (format, LOD, texture size, etc.), plus an additional setting for annotations.

Either a detector is executed, or an existing annotation is imported as an XML file, to produce a mesh carrying class information in its texture. Minimum views to vote defines the number of detected occurrences required to turn the RGB texture into a classified texture.

The segmented mesh can be viewed in the ContextCapture Master 3D view (depending on the mesh format), as well as the 2D segmentation result, if any.

Only image-segmentation and pointcloud-segmentation detectors can be used for this type of job.

Mesh patches

Mesh patches executes the same job as segmented mesh but additionally allows vectorization of all the areas of the mesh's texture that are considered classified. Each segmented texture patch is exported as an OBJ file. All material contained within these OBJ files can be exported as a point cloud.

Only image-segmentation and pointcloud-segmentation detectors can be used for this type of job. Mesh patches cannot be viewed in the ContextCapture Master 3D view.

About LOD naming convention

3D mesh productions with LOD use a specific naming convention for node files according to tile name, level of detail resolution, and node path (for LOD trees).

For a node file "Tile_+000_+003_L20_000013.dae", the meaning is the following:

  • Tile_+000_+003: tile name.
  • L20: normalized level of detail which is related to ground resolution.
    Table 1. Level of detail and ground resolution correspondence table (sample)

        Level of detail    Ground resolution (meter/pixel or unit/pixel)
        12                 16
        13                 8
        14                 4
        15                 2
        16                 1
        17                 0.5
        18                 0.25
        19                 0.125
        20                 0.0625
  • 000013: optional node path, only included for LOD tree.

Each digit of the node path corresponds to a child index (zero-based) in the tree. For quadtree and octree productions, the child index unambiguously indicates the quadrant/octant of the child node.
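
The convention can be decoded programmatically. A Python sketch, assuming file names follow the pattern of the example above; the ground resolution formula (2^(16 - level)) is inferred from Table 1:

    import re

    def parse_node_name(filename: str):
        """Decode a LOD node file name such as 'Tile_+000_+003_L20_000013.dae'."""
        m = re.match(r"(Tile_[+-]\d+_[+-]\d+)_L(\d+)(?:_(\d+))?\.\w+$", filename)
        tile, level, path = m.group(1), int(m.group(2)), m.group(3)
        resolution = 2.0 ** (16 - level)  # Table 1: L16 -> 1 unit/pixel, halving per level
        children = [int(d) for d in path] if path else None  # zero-based child indices
        return tile, level, resolution, children

    print(parse_node_name("Tile_+000_+003_L20_000013.dae"))
    # ('Tile_+000_+003', 20, 0.0625, [0, 0, 0, 0, 1, 3])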

Table 2. Example of node files for a production with LOD of type simple levels
Table 3. Example of node files for a production with LOD of type quadtree